Evolvable Hardware or Learning Hardware? Induction of State Machines from Temporal Logic Constraints
Abstract
Here we advocate an approach to learning hardware based on induction of finite state machines from temporal logic constraints. The method involves training on examples, constraint solving, determinization, state machine minimization, structural mapping, functional decomposition of multi-valued logic functions and relations, and finally, FPGA mapping. In our approach, learning takes place on the level of constraint acquisition and functional decomposition rather than on the lower level of programming binary switches. Our learning strategy is based on the principle of Occam's Razor, facilitating generalization and discovery. We implemented several learning algorithms using the DEC-PERLE-1 FPGA board.

1 Evolving in hardware versus learning in hardware

In recent years the scientific community has witnessed rapid developments in the area of Soft Computing. These approaches include Artificial Neural Nets (ANNs), Cellular Neural Nets (CNNs), Fuzzy Logic, Rough Sets, Genetic Algorithms (GAs), and Genetic and Evolutionary Programming. Several mixed approaches have also been created; in different ways, they combine elements of these areas with the goal of solving complex and poorly defined problems that could not be tackled by earlier, analytic models. What is common to all these approaches is that they propose a way for the system to learn automatically. The computer is taught by examples rather than completely programmed (instructed) in what to do. This philosophy also dominates the areas of Artificial Life, solving problems by analogy to nature, decision making, knowledge acquisition, and new approaches to intelligent robotics. Machine Learning thus becomes a new and general system design paradigm unifying these previously disconnected research areas. It is starting to become a new hardware construction paradigm as well.

Recently, the term Evolvable Hardware (EHW) has been coined [15], meaning the realization of a genetic algorithm (GA) in reconfigurable hardware; it is exemplified by the Brain Builder CBM [15]. The EHW approach to computing has raised considerable interest and enthusiasm among some researchers, but scepticism among others. One may ask: "Why a genetic algorithm?" Our experience prompts us to question the usefulness of GA as the sole learning method for reconfiguring binary FPGAs. Instead, we propose the Learning Hardware approach, which consists in using feedback from the environment (for instance, positive and negative examples from the trainer) to create a sequential network and subsequently realizing this network in FPGAs. Our Universal Logic Machine approach [35, 40, 38, 24, 37, 45] proposes the creation of a learning machine based on logic principles, in particular, temporal logic [32, 4, 5, 6, 7], constructive induction [2, 11, 27, 28], and rough set theory [34]. Our software algorithms require fast operations on complex logic expressions and the ability to solve NP-complete problems such as satisfiability. They should be realized in hardware to obtain the necessary speed-ups. Using a fast prototyping tool, the DEC-PERLE-1 board based on an array of Xilinx FPGAs, we are developing software/configware processors that accelerate the acquisition, synthesis, and optimization of Reactive State Machines. While GA is a simple and practically blind mechanism of Nature, it is easily realizable in hardware. We believe that this mechanism alone cannot produce good results. (Although it is relatively easy to do crossover and mutation in hardware, fitness function evaluation is difficult.)
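To make this last point concrete, the sketch below (ours, not from the paper; the bitstring encoding, sizes, and mutation rate are illustrative assumptions) shows the two genetic operators on FPGA-style configuration bitstrings. The operators themselves are a few lines, while the fitness evaluation, which would require configuring and exercising an actual circuit, has no comparably cheap realization:

```python
import random

random.seed(0)

def crossover(parent_a, parent_b):
    """One-point crossover of two equal-length configuration bitstrings."""
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(bits, rate=0.01):
    """Flip each configuration bit independently with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in bits]

# Hypothetical chromosomes standing in for FPGA switch settings. Real FPGA
# bitstreams run to many thousands of bits, which is one reason the chromosomes
# are extremely long at this level: the operators stay trivial, but every
# fitness evaluation means reconfiguring and testing a whole circuit.
parent_a = [random.randint(0, 1) for _ in range(64)]
parent_b = [random.randint(0, 1) for _ in range(64)]
child = mutate(crossover(parent_a, parent_b))
```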
In contrast, logic algorithms that draw upon human knowledge are optimal and mathematically sophisticated. They lead to high-quality learning results: knowledge generalization, discovery, no overfitting, and small learning errors [47, 1, 26, 20, 21]. Their software realizations, however, use such complex data structures and control that it is difficult to realize them in hardware. When we refer to Learning Hardware, we define the term "learning" very broadly, as any mechanism that leads to the improvement of operation; evolution-based learning is therefore included. Although specific learning concepts and their formalisms differ from one learning approach to another, what is common is that, in the process of learning, a network (combinational or sequential) is constructed that stores the knowledge acquired in the learning phase. The learned network is next run on old or new data. Responses may be correct or erroneous. The network's behavior is then evaluated by some fitness (cost) functions, and the learning and running phases are alternated. The process of solving problems thus consists of two phases: the phase of learning, which involves constructing and tuning the network, and the phase of acting, which means using the acquired knowledge, that is, running the network on data sets. Compared to the process of developing and using a computer, the first stage can be likened to the entire process of conceptualizing, designing, and optimizing a computer, and the second stage to using this computer to perform calculations. Standard computer hardware, however, cannot be redesigned when it fails to solve a problem correctly. Learning Hardware will redesign itself automatically using the new learning examples provided to it.

2 Logic rather than evolutionary methods for learning

Our ULM approach is based on FPGA technology and associated logic development methods (called Logic Synthesis by the design automation community and Constructive Induction by the Machine Learning community) rather than on neural or genetic algorithms. Michie [29] makes a distinction between black-box and knowledge-oriented concept learning systems by introducing the concepts of weak and strong criteria. A system satisfies the weak criterion if it uses sample data to generate an updated basis for improved performance on subsequent data. The strong criterion is satisfied if the system communicates the concepts it has learned in symbolic form [28]. Let us observe that ANNs, CNNs, and similar approaches satisfy only the weak criterion, while our approach satisfies the strong criterion. We believe that the results of the learning process, and even the process itself, should be rational. They should be similar to those of teaching humans, based on symbolic logic and not on the methods of Nature. Human thinking consists in the abstract use of symbols, rather than in assigning numeric weights to neurons. Our approach operates on higher and more natural symbolic representation levels. The built-in mathematical optimization techniques (such as graph coloring or satisfiability) support the principle of Occam's Razor, offering solutions that are provably good in the sense of Computational Learning Theory (COLT) [1, 47]. Thus, learning on a symbolic level is the first main point of our approach.
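As an illustration of the strong criterion (our sketch, not an algorithm from the paper; the toy function and variable names are assumptions), the code below greedily covers the positive examples of a Boolean function with the most general product terms that exclude all negatives, a crude Occam's Razor bias toward short rules, and then communicates the learned concept symbolically, as a human-readable DNF/SOP formula:

```python
from itertools import product

# Hypothetical toy dataset: f(a, b, c) = a AND (b OR NOT c), given as a
# complete truth table of positive and negative examples.
examples = {bits: bits[0] == 1 and (bits[1] == 1 or bits[2] == 0)
            for bits in product((0, 1), repeat=3)}
positives = {x for x, y in examples.items() if y}
negatives = {x for x, y in examples.items() if not y}

def covers(term, x):
    """A term maps variable index -> required value (1 or 0)."""
    return all(x[i] == v for i, v in term.items())

def all_terms(n):
    """All product terms over n variables, most general (shortest) first."""
    masks = product((None, 0, 1), repeat=n)
    return sorted(({i: v for i, v in enumerate(m) if v is not None}
                   for m in masks), key=len)

# Greedy cover: take the most general term that covers some still-uncovered
# positives and no negatives, until every positive example is covered.
uncovered, dnf = set(positives), []
for term in all_terms(3):
    if (uncovered
            and not any(covers(term, x) for x in negatives)
            and any(covers(term, x) for x in uncovered)):
        dnf.append(term)
        uncovered -= {x for x in uncovered if covers(term, x)}

# The strong criterion: the learned concept is communicated symbolically.
names = "abc"
print(" OR ".join(
    "(" + " AND ".join(names[i] if v else "NOT " + names[i]
                       for i, v in sorted(t.items())) + ")"
    for t in dnf))
# Prints: (a AND NOT c) OR (a AND b) -- a formula a human can inspect,
# unlike the weight vector a weak-criterion learner would produce.
```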
In our past research, we have used and compared in software various network structures for learning: two-level AND/OR (Sum-of-Products (SOP), or Disjunctive Normal Form (DNF)) [33], decision trees (C4.5), and multi-level decomposition structures [20, 21, 36, 53, 44], as well as various logic, non-logic, and mixed optimization methods: search [37], rule-based, set-covering, graph-coloring, genetic algorithms [16, 18] (including mixtures of logic and GA approaches), genetic programming [17], artificial neural nets, and simulated annealing. We compared the resulting complexity of our networks (Occam's Razor), as well as various ways of controlling the number of errors in the learning process [20, 21, 26]. The Decomposed Function Cardinality (DFC) and its extensions for MV logic [1, 20, 21, 44] were used as common measures of complexity (see the sketch below), because of their strong, theoretically proven properties [1, 47]. Our conclusion, based on these investigations, is that logic approaches, and especially the MV decomposition techniques, combined with smart heuristic strategies and good data representations, are usually superior to other approaches due to smaller net complexity and fewer learning errors. In our experience, especially poor results are obtained using genetic algorithms [16, 17, 18]. GA may perform well in other applications, but from both our experience and the literature we could not find a single problem domain in which a GA-based algorithm was superior to a hand-crafted algorithm in the design of a binary or multi-valued logic network. This is perhaps because researchers have long experience in creating efficient logic minimization algorithms (for instance, more papers have been written on SOP minimization than perhaps on any other engineering topic). In our approach we want to make use of this accumulated human experience, rather than "reinvent" algorithms using GA.

3 Learning hardware approach in Universal Logic Machines

Developers of evolvable and learning systems agree that, realized with current software or even parallel programming technologies, the learning phase and/or the execution phase are too slow for real-life problems, especially real-time problems. The situation is essentially the same regardless of whether exhaustive combinatorial search, simulated annealing, or evolutionary algorithms that involve millions of populations are used. Thus, researchers have proposed to speed up some phases by migrating from software to hardware. Many ambitious projects based on ANNs, cellular logic, DNA, simulated evolution, and biologically motivated hardware have been proposed that will perhaps be successful in the future, when realized on molecular or quantum levels. However, many of them are quite impractical in current technologies. Most approaches to evolvable hardware use binary Field Programmable Gate Arrays, because at present there is simply no other mass-scale, reconfigurable (reprogrammable), and relatively inexpensive hardware technology widely available. Since in binary FPGAs everything is realized on the level of binary logic gates and flip-flops, in our opinion the learning process should be performed on this level as well. Thus, learning on the level of logic gates and flip-flops is the second main point of our approach. We believe that the learning level of sequential logic nets is more natural than the higher level of arithmetic operations of ANNs or Fuzzy Logic functions, or the lower level of routing FPGA connection paths.
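For reference, the DFC measure mentioned above can be illustrated with a short calculation (our sketch of the binary case; the multi-valued extensions in [20, 21, 44] generalize the block cost). Assuming the usual convention that a block with k binary inputs and m outputs costs m * 2^k, the size of its truth table, the DFC of a decomposed network is the sum over its blocks:

```python
# Each block of a decomposition is described as (num_inputs, num_outputs).
# A block with k binary inputs and m outputs costs m * 2**k table entries;
# DFC is the total over all blocks, so smaller DFC means a simpler network.
def dfc(blocks):
    return sum(m * 2 ** k for k, m in blocks)

# A 6-input function realized as one lookup table costs 2**6 = 64, but
# decomposed into two 3-input blocks feeding a 2-input block it costs only
# 2**3 + 2**3 + 2**2 = 20: the Occam's Razor-preferred network.
print(dfc([(6, 1)]))                  # 64: undecomposed
print(dfc([(3, 1), (3, 1), (2, 1)]))  # 20: after decomposition
```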
Once we decide to realize the network using logic gates in an FPGA, we should apply efficient logic design algorithms and realize them in hardware for speedup. We also believe that all methods that exist in VLSI design, and especially the powerful EDA (Electronic Design Automation) tools, should be reused in their entirety, rather than duplicated by naive low-level evolutionary algorithms. Engineers have spent many years developing such tools in the area of digital design automation, especially for reconfigurable computers: state machine design, logic synthesis, technology mapping, placement and routing, partitioning, timing analysis, etc. Their use will facilitate the creation of Learning Hardware. The Occam's Razor principle should also be used, because only it leads to meaningful discoveries and explainable results. In conclusion, we do not believe that the "purist strategies" for evolvable hardware are practically acceptable for most commercial applications of Learning Hardware. Therefore, we propose the concept of Learning Hardware based on previous human problem-solving experience and the application of mathematical algorithms and problem-solving strategies, rather than relying on the two basic methods of Evolvable Hardware: ANNs and GA. Learning/evolution will remain the main principle of building new-generation hardware, but it should be restricted to higher, abstract levels rather than the lowest-level FPGA resources. Variant evaluation/selection should also be performed on abstract levels, before mapping to low-level field-programmable resources, for which chromosomes are extremely long and GA is very inefficient. The proposed ULM approach to Learning Hardware can be summarized as follows:

1. Based on sets of examples specified in our input language L, we create a Reactive State Machine (RSM), in particular, a (combinational) function or a relation with no temporal variables. The description consists of an input-output specification, initial state specifications, and global environment constraints. This machine is usually non-deterministic, but is state-minimal by construction [4, 5, 6, 7] with respect to all its variables as state variables.

2. The machine can be determinized (converted from non-deterministic to deterministic form). Next, the machine is state-minimized (with respect to the new set of state variables, which are a subset of the initial input/output variables). We also plan to develop a method to obtain a state-minimal determinization right away. (A sketch of the classical subset construction behind determinization is given after Section 4.)

3. The machine is mapped to constrained structural resources which we call Regular Automata (RA). This step assumes some regular structure, such as a counter, shift register, cellular automaton, or any local-connection-based sequential network, and checks for the maximum isomorphism between the autonomous state transition graph of this network and the RSM graph. (The new methods of mapping a state machine to a counter or a shift register are both part of our methodology.)

4. The time-based MV logic expressions of the Regular Automata are decomposed. We use an algorithm which is our generalization of the functional Ashenhurst-Curtis decomposition [36, 44]. The timed variables, and the multi-valued variables, are converted to new binary variables.

5. The (quasi)optimally constructed network is logically mapped to standard FPGA CLBs and realized using standard partitioning, placement, and routing with the help of EDA tools from Xilinx or other companies. Thus, each RSM is converted to a binary pattern of programming switches in the FPGA.
6. The knowledge of the machine is stored in binary memory patterns representing the final FPGA reconfiguration information. Under the supervision of the software program, the hardware switches between a number of evolved circuits, depending on rules that can also be acquired automatically. This phase is therefore similar to the CBM approach of de Garis.

7. As the network solves new problems, the new data sets and training decisions are accumulated and the network is repeatedly redesigned. An old network can serve as a redesign plan for a new network, or the latter is "designed from scratch" to avoid any bias.

Thus, we replace the EHW process of creating high-level behaviors by evolving at a low level with the ULM model of learning at a high level and then compiling to a low level using standard FPGA-based tools. Observe also that the same physical FPGA resources are multiplexed to implement the virtual human-designed learning hardware and the automatically learned data hardware. While the "learning hardware" is designed once and cannot be changed, the "data hardware" can be modified indefinitely.

4 Induction of reactive state machines from temporal logic constraints

Because decompositions of combinational logic have already been explained in detail in our previous papers [20, 21, 26, 32], here we concentrate on Reactive State Machines. As explained in the previous section, once the expression (set of examples) in the input language has been created, the problem is closed and becomes that of designing the minimal state machine using regular binary resources of lookup tables and flip-flops. Our state machine design integrates methods developed in the USA and the former USSR. We introduce new, efficient, and practical methods for the specification and synthesis of state machines [4, 5, 6, 7, 32] that can significantly improve on the solutions currently used in the United States. Developed in the major computer centers of the former Soviet Union, these methods were never published in English or presented to an American audience. Theoretically, any FSM-based EDA tools can be used to solve these tasks. There are, however, some specifics of our approach which make it different in practice. These differences are related to two issues: (1) the use of temporal logic as the input specification; (2) the use of Regular Automata for structural design.
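To make the determinization of step 2 above concrete, here is a minimal sketch of the classical subset construction (ours, not the paper's algorithm), assuming a toy NFA encoded as a Python dict mapping (state, symbol) pairs to successor sets. The paper's RSMs additionally carry temporal constraints and undergo state minimization, both of which this illustration omits:

```python
def determinize(nfa, start, alphabet):
    """Subset construction: each DFA state is a frozenset of NFA states."""
    start_set = frozenset([start])
    transitions, visited, frontier = {}, {start_set}, [start_set]
    while frontier:
        current = frontier.pop()
        for symbol in alphabet:
            # The DFA successor is the union of all NFA successors.
            successor = frozenset(
                s for q in current for s in nfa.get((q, symbol), ()))
            transitions[(current, symbol)] = successor
            if successor not in visited:
                visited.add(successor)
                frontier.append(successor)
    return transitions, visited

# Toy non-deterministic machine over {0, 1}: from 'p' a 1 may either loop
# or move to 'q', so the machine is non-deterministic by construction.
nfa = {('p', 0): {'p'}, ('p', 1): {'p', 'q'},
       ('q', 0): {'r'}, ('q', 1): {'r'}}
dfa, states = determinize(nfa, 'p', (0, 1))
print(len(states), "deterministic states")  # 4 subset states for this NFA
```

In the worst case the subset construction is exponential in the number of non-deterministic states, which is why a method producing a state-minimal determinization directly, as planned in step 2, is attractive.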